Results 1 - 20 of 19,159
1.
Sci Data ; 11(1): 373, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609405

ABSTRACT

In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition are pivotal pillars of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset, addressing the diverse requirements of computerized surgical workflow analysis and post-operative irregularity detection in cataract surgery. We validate the quality of the annotations by benchmarking several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance on cataract surgery videos. The dataset and annotations are publicly available on Synapse.
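To make the benchmarking step concrete, here is a minimal sketch of how frame-level phase-recognition and segmentation annotations like these are typically scored; the label arrays and metric choices are illustrative, not taken from the paper:

```python
# Minimal sketch of frame-level surgical phase-recognition benchmarking.
# Label sets and arrays are illustrative placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, jaccard_score

# One phase label per video frame (hypothetical ground truth vs. model output).
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 3, 3, 0])

print("frame accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:      ", f1_score(y_true, y_pred, average="macro"))
# For segmentation masks, per-class IoU (Jaccard) is the standard metric.
print("macro IoU:     ", jaccard_score(y_true, y_pred, average="macro"))
```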


Subjects
Cataract Extraction, Cataract, Deep Learning, Video Recording, Humans, Benchmarking, Neural Networks (Computer), Cataract Extraction/methods
2.
Sci Rep ; 14(1): 8609, 2024 04 14.
Article in English | MEDLINE | ID: mdl-38615039

ABSTRACT

With the advent of large language models, evaluating and benchmarking these systems on important AI problems has taken on newfound importance. Such benchmarking typically involves comparing a system's predictions against human labels (or a single 'ground truth'). However, much recent work in psychology suggests that most tasks involving significant human judgment exhibit non-trivial degrees of noise. In the book Noise, Kahneman and colleagues suggest that noise may be a much more significant component of inaccuracy than bias, which the AI community has studied far more extensively. This article proposes a detailed noise audit of human-labeled benchmarks in machine commonsense reasoning, an important current area of AI research. We conduct noise audits under two experimental conditions: a smaller-scale but higher-quality labeling setting, and a larger-scale, more realistic online crowdsourced setting. Using Kahneman's framework of noise, our results consistently show non-trivial amounts of level, pattern, and system noise, even in the higher-quality setting, with comparable results in the crowdsourced setting. We find that noise can significantly influence the performance estimates we obtain for commonsense reasoning systems, even if the 'system' is a human, in some cases by almost 10 percent. Labeling noise also affects performance estimates of systems like ChatGPT by more than 4 percent. Our results suggest that the AI community's default practice of assuming a single ground truth, even on problems requiring seemingly straightforward human judgment, warrants empirical and methodological revisiting.
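For concreteness, a rough sketch of a Kahneman-style noise decomposition on a judges-by-items rating matrix, using the standard identity system² = level² + pattern²; the ratings are made up, and the paper's exact estimators may differ:

```python
# Rough two-way decomposition of labeling noise on a judges x items matrix.
# Values are synthetic; this follows system^2 = level^2 + pattern^2.
import numpy as np

ratings = np.array([   # rows: judges, cols: labeled items
    [3, 4, 2, 5, 4],
    [2, 4, 1, 4, 3],
    [4, 5, 3, 5, 5],
])
judge_means = ratings.mean(axis=1)
item_means = ratings.mean(axis=0)

level_var = judge_means.var()          # level noise^2: judges differ in overall severity
residual = ratings - judge_means[:, None] - item_means[None, :] + ratings.mean()
pattern_var = (residual ** 2).mean()   # pattern noise^2: judge-by-item interaction
system_var = level_var + pattern_var   # system noise^2

print(f"level noise:   {level_var ** 0.5:.3f}")
print(f"pattern noise: {pattern_var ** 0.5:.3f}")
print(f"system noise:  {system_var ** 0.5:.3f}")
```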


Subjects
Benchmarking, Problem Solving, Humans, Judgment, Books, Language
3.
Genome Biol ; 25(1): 97, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622738

ABSTRACT

BACKGROUND: As most viruses remain uncultivated, metagenomics is currently the main method for virus discovery. Detecting viruses in metagenomic data, however, is not trivial. In the past few years, many bioinformatic virus identification tools have been developed for this task, making it challenging to choose the right tools, parameters, and cutoffs. As these tools measure different biological signals and rely on different algorithms, training procedures, and reference databases, independent benchmarking is needed to give users objective guidance. RESULTS: We compare the performance of nine state-of-the-art virus identification tools, in thirteen modes, on eight paired viral and microbial datasets from three distinct biomes, including a new complex dataset from Antarctic coastal waters. The tools have highly variable true positive rates (0-97%) and false positive rates (0-30%). PPR-Meta best distinguishes viral from microbial contigs, followed by DeepVirFinder, VirSorter2, and VIBRANT. Different tools identify different subsets of the benchmarking data, and all tools except Sourmash find unique viral contigs. Tool performance improved with adjusted parameter cutoffs, indicating that cutoffs should be tuned before use. CONCLUSIONS: Together, our independent benchmarking facilitates the selection of bioinformatic virus identification tools and offers parameter-adjustment guidance for viromics researchers.
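A toy illustration of the cutoff-dependent true/false positive rates discussed above; scores and labels are synthetic, not from the benchmarked tools:

```python
# Per-tool TPR/FPR at an adjustable score cutoff, mirroring the kind of
# benchmarking described above. Scores and labels are synthetic.
import numpy as np

def tpr_fpr(scores, is_viral, cutoff):
    pred = scores >= cutoff
    tpr = (pred & is_viral).sum() / is_viral.sum()
    fpr = (pred & ~is_viral).sum() / (~is_viral).sum()
    return tpr, fpr

rng = np.random.default_rng(0)
is_viral = rng.random(1000) < 0.3                      # contig labels
scores = np.where(is_viral, rng.normal(0.8, 0.15, 1000),
                  rng.normal(0.4, 0.20, 1000))         # a tool's confidence scores

for cutoff in (0.5, 0.7, 0.9):                         # tuning the cutoff trades TPR for FPR
    tpr, fpr = tpr_fpr(scores, is_viral, cutoff)
    print(f"cutoff {cutoff}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```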


Subjects
Benchmarking, Viruses, Metagenome, Ecosystem, Metagenomics/methods, Computational Biology/methods, Genetic Databases, Viruses/genetics
4.
Int J Mol Sci ; 25(7)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38612602

ABSTRACT

Molecular property prediction is an important task in drug discovery, and with the help of self-supervised learning methods its performance can be improved by exploiting large-scale unlabeled datasets. In this paper, we propose a triple generative self-supervised learning method for molecular property prediction, called TGSS. Three encoders, a bidirectional long short-term memory recurrent neural network (BiLSTM), a Transformer, and a graph attention network (GAT), are used to pre-train the model on molecular sequence and graph-structure data and to extract molecular features. A variational autoencoder (VAE) is used to reconstruct the features from the three models. In the downstream task, to balance the information between the different molecular features, a feature fusion module assigns a separate weight to each feature. In addition, to improve the interpretability of the model, atomic similarity heat maps are introduced to demonstrate the effectiveness and rationality of the molecular feature extraction. We demonstrate the accuracy of the proposed method through comparative experiments on chemical and biological benchmark datasets.
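A minimal PyTorch sketch of the kind of feature-fusion module described, with learnable weights balancing the three encoder outputs; the dimensions and the softmax weighting are assumptions, not the authors' exact design:

```python
# Learnable weighted fusion of three encoder features (BiLSTM, Transformer,
# GAT). Dimensions and weighting scheme are illustrative assumptions.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(3))   # one weight per encoder
        self.proj = nn.Linear(dim, dim)

    def forward(self, f_seq, f_trans, f_graph):
        w = torch.softmax(self.logits, dim=0)        # weights sum to 1
        fused = w[0] * f_seq + w[1] * f_trans + w[2] * f_graph
        return self.proj(fused)

fusion = FeatureFusion(dim=128)
feats = [torch.randn(4, 128) for _ in range(3)]      # batch of 4 molecules
out = fusion(*feats)                                 # -> (4, 128) fused features
```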


Subjects
Benchmarking, Drug Discovery, Animals, Electric Power Supplies, Estrus, Supervised Machine Learning
5.
Int J Mol Sci ; 25(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38612639

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful technique for investigating biological heterogeneity at the single-cell level in human systems and model organisms. Recent advances in scRNA-seq have enabled the pooling of cells from multiple samples into single libraries, thereby increasing sample throughput while reducing technical batch effects, library preparation time, and overall cost. However, a comparative analysis of scRNA-seq methods with and without sample multiplexing is lacking. In this study, we benchmarked methods from two representative platforms: Parse Biosciences (Parse; with sample multiplexing) and 10x Genomics (10x; without sample multiplexing). Using peripheral blood mononuclear cells (PBMCs) obtained from two healthy individuals, we demonstrate that demultiplexed scRNA-seq data obtained from Parse showed cell type frequencies similar to those of 10x data, where samples were not multiplexed. Despite relatively lower cell capture affecting library preparation, Parse detected rare cell types (e.g., plasmablasts and dendritic cells), likely owing to its higher sensitivity in gene detection. Moreover, a comparative analysis of transcript quantification between the two platforms revealed platform-specific distributions of gene length and GC content. These results offer guidance for researchers designing high-throughput scRNA-seq studies.
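As a sketch of the cell-type frequency comparison described, assuming per-cell annotations from each platform (the data and column names below are illustrative):

```python
# Compare cell-type composition between two platforms from per-cell labels.
# Data are synthetic stand-ins for annotated PBMC cells.
import pandas as pd

cells = pd.DataFrame({
    "platform": ["Parse"] * 6 + ["10x"] * 6,
    "cell_type": ["T", "T", "B", "NK", "Mono", "DC",
                  "T", "B", "B", "NK", "Mono", "T"],
})
freq = (cells.groupby("platform")["cell_type"]
             .value_counts(normalize=True)
             .unstack(fill_value=0))
print(freq.round(2))            # per-platform cell-type proportions
print(freq.T.corr().round(2))   # cross-platform concordance of proportions
```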


Subjects
Benchmarking, Mononuclear Leukocytes, Humans, Gene Library, Genomics, RNA Sequence Analysis
6.
BMC Bioinformatics ; 25(1): 148, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609877

ABSTRACT

Protein toxins are defense mechanisms and adaptations found in various organisms and microorganisms, and their use in scientific research as therapeutic candidates is gaining relevance due to their effectiveness and specificity against cellular targets. However, discovering these toxins is time-consuming and expensive. In silico tools, particularly those based on machine learning and deep learning, have emerged as valuable resources to address this challenge. Existing tools primarily focus on binary classification, determining whether a protein is a toxin or not, and occasionally identify specific types of toxins. For the first time, we propose an approach capable of classifying protein toxins into 27 distinct categories based on their mode of action within cells. To accomplish this, we assessed multiple machine learning techniques and found that an ensemble incorporating the Light Gradient Boosting Machine and Quadratic Discriminant Analysis algorithms performed best. During tenfold cross-validation on the training dataset, the model achieved 0.840 accuracy, 0.827 F1 score, 0.836 precision, 0.840 sensitivity, and 0.989 AUC. In testing on an independent dataset, it achieved 0.846 accuracy, 0.838 F1 score, 0.847 precision, 0.849 sensitivity, and 0.991 AUC. The resulting tool, MultiToxPred 1.0, is accessible through a web application. We believe that MultiToxPred 1.0 has the potential to become an indispensable resource for researchers, facilitating the efficient identification of protein toxins and advancing the understanding of their therapeutic potential.
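A hedged sketch of an ensemble of this kind, soft-voting over LightGBM and QDA with tenfold cross-validation; the features are random placeholders, and MultiToxPred's real descriptors and weighting are not reproduced:

```python
# Soft-voting ensemble of LightGBM and QDA for 27-class toxin classification.
# Features and labels are synthetic; this only illustrates the pipeline.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(540, 50))                        # e.g., sequence-derived features
y = rng.permutation(np.repeat(np.arange(27), 20))     # 27 mode-of-action classes

ensemble = VotingClassifier(
    estimators=[("lgbm", LGBMClassifier(n_estimators=200)),
                ("qda", QuadraticDiscriminantAnalysis(reg_param=0.1))],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```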


Subjects
Algorithms, Biological Toxins, Benchmarking, Discriminant Analysis, Machine Learning, Research Design
7.
Sensors (Basel) ; 24(7)2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38610260

ABSTRACT

Wearable technology and neuroimaging equipment using photoplethysmography (PPG) have become increasingly popular in recent years. Several investigations deriving pulse rate variability (PRV) from PPG have demonstrated that a slight bias exists compared to concurrent heart rate variability (HRV) estimates. PPG devices commonly sample at ~20-100 Hz, but the minimum sampling frequency required to derive valid PRV metrics is unknown. Further, owing to different autonomic innervation, it is unknown whether PRV metrics agree between the cerebral and peripheral vasculature. Cardiac activity was obtained concurrently via electrocardiography (ECG) and PPG in 54 participants (29 females) in an upright orthostatic position. PPG data were collected at three anatomical locations, the left third phalanx, the middle cerebral artery, and the posterior cerebral artery, using a Finapres NOVA device and transcranial Doppler ultrasound. Data were sampled for five minutes at 1000 Hz and downsampled to frequencies ranging from 20 to 500 Hz. HRV (via ECG) and PRV (via PPG) were quantified and compared at 1000 Hz using Bland-Altman plots and the coefficient of variation (CoV). A sampling frequency of ~100-200 Hz was required to produce PRV metrics with a bias of less than 2%, while a sampling rate of ~40-50 Hz kept the bias below 20%. At 1000 Hz, time- and frequency-domain PRV measures were slightly elevated compared to those derived from HRV (mean bias: ~1-8%). Consistent with previous reports, PRV and HRV were not interchangeable biomarkers, owing to the different nature of the collected waveforms. Nevertheless, PRV estimates displayed greater validity at lower sampling rates than HRV estimates.
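One way to see the sampling-rate effect is to quantize beat times to a coarser clock and measure the resulting bias in a PRV metric such as RMSSD; the sketch below uses synthetic inter-beat intervals and is not the study's pipeline:

```python
# Effect of PPG sampling rate on a PRV metric (RMSSD): limit peak times to a
# 1/fs grid and compare against the 1000 Hz reference. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
ibi = rng.normal(0.8, 0.05, 300)           # inter-beat intervals (s), reference
peaks = np.cumsum(ibi)                     # beat times

def rmssd(intervals):
    return np.sqrt(np.mean(np.diff(intervals) ** 2))

ref = rmssd(ibi)
for fs in (500, 200, 100, 50, 20):
    quantized = np.round(peaks * fs) / fs  # peak times at 1/fs resolution
    bias = (rmssd(np.diff(quantized)) - ref) / ref * 100
    print(f"{fs:4d} Hz: RMSSD bias {bias:+.1f}%")
```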


Subjects
Autonomic Nervous System, Benchmarking, Female, Humans, Heart Rate, Data Correlation, Electrocardiography
8.
Sensors (Basel) ; 24(7)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38610312

ABSTRACT

Electrocardiogram (ECG) reconstruction from a contact photoplethysmogram (PPG) would be transformative for cardiac monitoring. We investigated the fundamental and practical feasibility of such reconstruction by first replicating pioneering work in the field, with the aim of assessing the methods and evaluation metrics used. We then expanded existing research by investigating different cycle segmentation methods and different evaluation scenarios to robustly verify both fundamental feasibility and practical potential. We found that reconstruction using the discrete cosine transform (DCT) and a linear ridge regression model shows good results when PPG and ECG cycles are semantically aligned (the ECG R peak and the PPG systolic peak are aligned) before training the model. Such reconstruction can be useful from a morphological perspective, but loses important physiological information (the precise R peak location) due to cycle alignment. We also found better performance when personalization was used in training, while a general model in a leave-one-subject-out evaluation performed poorly, showing that a general mapping between PPG and ECG is difficult to derive. While such reconstruction is valuable, as the ECG contains more fine-grained information about cardiac activity and offers a different modality (an electrical signal) than the PPG (an optical signal), our findings show that its usefulness depends on the application, with a trade-off between the morphological quality of QRS complexes and the precise temporal placement of the R peak. Finally, we highlight future directions that may resolve existing problems and allow for reliable and robust cross-modal physiological monitoring using just the PPG.
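A minimal sketch of the cycle-wise DCT-plus-ridge idea described above, with synthetic stand-ins for the aligned PPG/ECG cycles:

```python
# Cycle-wise ECG reconstruction from PPG: represent aligned cycles by leading
# DCT coefficients and learn a linear (ridge) map. Signals are synthetic.
import numpy as np
from scipy.fft import dct, idct
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n_cycles, cycle_len, n_coef = 200, 256, 40
ppg = rng.normal(size=(n_cycles, cycle_len))    # aligned PPG cycles (placeholder)
ecg = rng.normal(size=(n_cycles, cycle_len))    # corresponding ECG cycles

X = dct(ppg, norm="ortho", axis=1)[:, :n_coef]  # compact per-cycle representation
Y = dct(ecg, norm="ortho", axis=1)[:, :n_coef]

model = Ridge(alpha=1.0).fit(X[:150], Y[:150])  # train on the first 150 cycles
Y_hat = model.predict(X[150:])

# Invert the DCT (zero-padding the truncated coefficients) to get waveforms.
coef_full = np.zeros((Y_hat.shape[0], cycle_len))
coef_full[:, :n_coef] = Y_hat
ecg_rec = idct(coef_full, norm="ortho", axis=1)
```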


Subjects
Electrocardiography, Photoplethysmography, Feasibility Studies, Benchmarking, Electricity
9.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610349

ABSTRACT

Seismocardiography (SCG), a method for measuring heart-induced chest vibrations, is gaining attention as a non-invasive, accessible, and cost-effective approach for the diagnosis and monitoring of cardiac pathologies. This study explores the integration of SCG acquired through smartphone technology by assessing the accuracy of metrics derived from smartphone recordings and their consistency when the recordings are performed by patients. We assessed the reliability of smartphone-derived SCG in computing median kinetic-energy parameters per recording in 220 patients with various cardiovascular conditions. The study involved three key procedures: (1) simultaneous measurements with a validated hardware device and a commercial smartphone; (2) consecutive smartphone recordings performed by both clinicians and patients; and (3) patients' self-conducted home recordings over three months. Our findings indicate moderate-to-high reliability of smartphone-acquired SCG metrics compared with those obtained from a validated device, with an intraclass correlation coefficient (ICC) > 0.77. The reliability of patient-acquired SCG metrics was high (ICC > 0.83). Within the cohort, 138 patients had smartphones that met the study's compatibility criteria, with an observed at-home compliance rate of 41.4%. This research validates the potential of smartphone-derived SCG acquisition to provide repeatable SCG metrics in telemedicine, laying a foundation for future studies to enhance the precision of at-home cardiac data acquisition.
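For reference, a compact numpy implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement), the kind of reliability statistic quoted above, applied to a synthetic subjects-by-devices matrix:

```python
# ICC(2,1) per Shrout & Fleiss, computed on a subjects x raters matrix.
# The data matrix here is synthetic (device vs. smartphone measurements).
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters/devices
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(3)
true_val = rng.normal(50, 10, size=(220, 1))        # per-patient true metric
Y = true_val + rng.normal(0, 3, size=(220, 2))      # two measurement methods
print(f"ICC(2,1) = {icc_2_1(Y):.2f}")               # high agreement expected
```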


Subjects
Cardiovascular Diseases, Smartphone, Humans, Reproducibility of Results, Physical Phenomena, Benchmarking, Cardiovascular Diseases/diagnosis
10.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610403

ABSTRACT

The assessment of fine motor competence plays a pivotal role in neuropsychological examinations for the identification of developmental deficits. Several tests have been proposed to characterize fine motor competence, with evaluation metrics primarily based on qualitative observation, limiting quantitative assessment to measures such as test duration. The Placing Bricks (PB) test evaluates fine motor competence across the lifespan, relying on the measurement of time to completion. The present study aims to instrument the PB test using wearable inertial sensors to complement the standard assessment with reliable, objective, process-oriented measures of performance. Fifty-four primary school children (27 six-year-olds and 27 seven-year-olds) performed the PB test according to the standard protocol with their dominant and non-dominant hands while wearing two tri-axial inertial sensors, one per wrist. An ad hoc algorithm based on the analysis of forearm angular velocity was developed to automatically identify task events and to quantify phases and their variability. The algorithm's performance was tested against video recordings in data from five children. Cycle and placing durations showed strong agreement between IMU- and video-derived measurements, with a mean difference <0.1 s, 95% confidence intervals <50% of the median phase duration, and a very high positive correlation (ρ > 0.9). Analyzing the whole population, significant age differences were found: six-year-olds exhibited longer cycle durations and higher variability, indicating an earlier stage of development and potential differences in hand dominance; seven-year-olds demonstrated quicker and less variable performance, aligning with the expected maturation and refined motor control associated with dominant-hand training during the first year of school. The proposed sensor-based approach allows the quantitative assessment of fine motor competence in children, providing a portable and rapid tool for monitoring developmental progress.
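A sketch of this style of event detection: find placing events as peaks in the forearm angular-velocity signal, then derive cycle durations and their variability (the signal and thresholds below are illustrative, not the paper's algorithm):

```python
# Detect placing events as peaks in a (synthetic) forearm angular-velocity
# signal, then compute cycle durations and their coefficient of variation.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                            # IMU sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
gyro = np.sin(2 * np.pi * 0.8 * t) + 0.1 * rng.normal(size=t.size)

# Peaks must exceed a height threshold and be at least 0.5 s apart.
peaks, _ = find_peaks(gyro, height=0.5, distance=int(0.5 * fs))
cycle_dur = np.diff(t[peaks])                       # time between placing events
print(f"cycles: {cycle_dur.size}, mean {cycle_dur.mean():.2f} s, "
      f"CV {cycle_dur.std() / cycle_dur.mean() * 100:.1f}%")
```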


Subjects
Algorithms, Benchmarking, Child, Humans, Forearm, Longevity, Neuropsychological Tests
11.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610580

ABSTRACT

This paper contributes to the development of a Next Generation First Responder (NGFR) communication platform with the key goal of embedding it into a smart city technology infrastructure. The framework of this approach is a concept known as SmartHub, developed by the US Department of Homeland Security. The proposed embedding methodology complies with the standard categories and indicators of smart city performance. This paper offers two practice-centered extensions of the NGFR hub, which are also the main results: first, cognitive workload monitoring of first responders as a basis for their performance assessment, monitoring, and improvement; and second, emergency assistance tools for individuals with disabilities, a highly sensitive societal problem. Both extensions explore various technological-societal dimensions of smart cities, including interoperability, standardization, and the accessibility of assistive technologies for people with disabilities. Regarding cognitive workload monitoring, the core result is a novel AI formalism: an ensemble of machine learning processes aggregated using machine reasoning. This ensemble enables predictive situation assessment and self-aware computing, the basis of the digital twin concept. We experimentally demonstrate a specific component of a digital twin of an NGFR: near-real-time monitoring of the NGFR's cognitive workload. Regarding the second result, emergency assistance for individuals with disabilities, which originated in efforts to make assistive technologies accessible and promote disability inclusion, we provide an NGFR specification focusing on AI-formalism-based interactions on a unified hub platform. The paper also discusses a technology roadmap using the notion of the Emergency Management Cycle (EMC), a commonly accepted doctrine for managing disasters through the steps of mitigation, preparedness, response, and recovery, and positions the NGFR hub as a benchmark for smart city emergency services.


Subjects
Disasters, Emergency Medical Services, Emergency Responders, Humans, Cities, Benchmarking
12.
Sci Rep ; 14(1): 7697, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565624

ABSTRACT

The rapid increase in biomedical publications necessitates efficient systems to automatically handle Biomedical Named Entity Recognition (BioNER) tasks in unstructured text. However, accurately detecting biomedical entities is challenging due to the complexity of their names and the frequent use of abbreviations. In this paper, we propose BioBBC, a deep learning (DL) model that utilizes multi-feature embeddings and is built on a BERT-BiLSTM-CRF architecture to address the BioNER task. BioBBC consists of three main layers: an embedding layer, a bidirectional Long Short-Term Memory (BiLSTM) layer, and a Conditional Random Fields (CRF) layer. BioBBC takes sentences from the biomedical domain as input and identifies the biomedical entities mentioned in the text. The embedding layer generates enriched contextual representation vectors by learning the text through four types of embeddings: part-of-speech (POS) tag embeddings, character-level embeddings, BERT embeddings, and data-specific embeddings. The BiLSTM layer produces additional syntactic and semantic feature representations. Finally, the CRF layer identifies the best possible tag sequence for the input sentence. Our model is well-constructed and well-optimized for detecting different types of biomedical entities. In our experiments, the model outperformed state-of-the-art (SOTA) models, with significant improvements across six benchmark BioNER datasets.
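A skeleton of a BiLSTM-CRF tagging head of the kind described, using the pytorch-crf package; the multi-feature embedding layer is collapsed into a single concatenated embedding for brevity, and all dimensions are assumptions:

```python
# BiLSTM-CRF tagging head for NER-style sequence labeling.
# The embedding input stands in for concatenated BERT + char + POS +
# data-specific embeddings produced upstream.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, emb_dim=868, hidden=256, num_tags=7):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embeddings, tags, mask):
        emissions = self.emit(self.lstm(embeddings)[0])
        return -self.crf(emissions, tags, mask=mask)   # negative log-likelihood

    def predict(self, embeddings, mask):
        emissions = self.emit(self.lstm(embeddings)[0])
        return self.crf.decode(emissions, mask=mask)   # best tag sequences
```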


Subjects
Language, Semantics, Natural Language Processing, Benchmarking, Speech
13.
Health Promot Int ; 39(2)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38568732

ABSTRACT

The climate crisis significantly impacts the health and well-being of older adults, both directly and indirectly. This issue is of growing concern in Canada due to the country's rapidly accelerating warming trend and expanding elderly population. This article serves a threefold purpose: (i) outlining the impacts of the climate crisis on older adults, (ii) providing a descriptive review of existing policies with a specific focus on the Canadian context, and (iii) promoting actionable recommendations. Our review reveals the application of current strategies, including early warning systems, enhanced infrastructure, sustainable urban planning, healthcare access, social support systems, and community engagement, in enhancing resilience and reducing health consequences among older adults. Within the Canadian context, we then emphasize the importance of establishing robust risk metrics and evaluation methods to prepare for and manage the impacts of the climate crisis efficiently. We underscore the value of vulnerability mapping, utilizing geographic information to identify regions where older adults are most at risk. This allows for targeted interventions and resource allocation. We recommend employing a root cause analysis approach to tailor risk response strategies, along with a focus on promoting awareness, readiness, physician training, and fostering collaboration and benchmarking. These suggestions aim to enhance disaster risk management for the well-being and resilience of older adults in the face of the climate crisis.


Subjects
Disaster Planning, Disasters, Humans, Aged, Canada, Benchmarking, City Planning
14.
PLoS One ; 19(4): e0299360, 2024.
Article in English | MEDLINE | ID: mdl-38557660

ABSTRACT

Ovarian cancer is a highly lethal malignancy. Segmentation of ovarian medical images is a necessary prerequisite for diagnosis and treatment planning, so accurately segmenting ovarian tumors is of utmost importance. In this work, we propose a hybrid network called PMFFNet to improve the segmentation accuracy of ovarian tumors. PMFFNet utilizes an encoder-decoder architecture. Specifically, the encoder incorporates the ViTAEv2 model to extract inter-layer multi-scale features from the feature pyramid. To address the limitation of a fixed window size, which hinders sufficient interaction of information, we introduce Varied-Size Window Attention (VSA) into the ViTAEv2 model to capture rich contextual information. Additionally, recognizing the significance of multi-scale features, we introduce a Multi-scale Feature Fusion Block (MFB) module. The MFB module enhances the network's capacity to learn intricate features by capturing both local and multi-scale information, enabling more precise segmentation of ovarian tumors. In conjunction with our designed decoder, the model achieves outstanding performance on the MMOTU dataset, scoring 97.24%, 91.15%, and 87.25% on the mACC, mIoU, and mDice metrics, respectively. Compared with several UNet-based and more advanced models, our approach demonstrates the best segmentation performance.
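For reference, a quick implementation of the mIoU and mDice metrics quoted above, computed per class on label maps (the arrays are toy examples):

```python
# Per-class IoU and Dice on segmentation label maps, averaged over classes
# present in either map. Arrays are toy examples.
import numpy as np

def iou_dice(pred, target, num_classes):
    ious, dices = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter, union = (p & t).sum(), (p | t).sum()
        if union == 0:
            continue                       # class absent from both maps
        ious.append(inter / union)
        dices.append(2 * inter / (p.sum() + t.sum()))
    return np.mean(ious), np.mean(dices)

pred = np.array([[0, 1, 1], [0, 2, 2], [0, 0, 2]])
target = np.array([[0, 1, 1], [0, 2, 1], [0, 0, 2]])
miou, mdice = iou_dice(pred, target, num_classes=3)
print(f"mIoU={miou:.3f}, mDice={mdice:.3f}")
```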


Subjects
Ovarian Neoplasms, Female, Humans, Ovarian Neoplasms/diagnostic imaging, Benchmarking, Learning, Medical Oncology, Computer-Assisted Image Processing
15.
PLoS One ; 19(4): e0300653, 2024.
Article in English | MEDLINE | ID: mdl-38557860

ABSTRACT

Photonic radar, a cornerstone of innovative microwave photonics applications, is emerging as a pivotal technology for future Intelligent Transportation Systems (ITS). Offering enhanced accuracy and reliability, it stands at the forefront of target detection and recognition across varying weather conditions. Recent advancements have concentrated on augmenting radar performance through high-speed, wide-band signal processing, a direct benefit of modern photonics' attributes such as EMI immunity, minimal transmission loss, and wide bandwidth. Our work introduces a photonic radar system that employs Frequency Modulated Continuous Wave (FMCW) signals combined with Mode Division and Wavelength Division Multiplexing (MDM-WDM). This fusion not only enhances target detection and recognition across diverse weather scenarios, including various intensities of fog and solar scintillation, but also demonstrates substantial resilience against solar noise. Furthermore, we integrated machine learning techniques, including Decision Tree, Extremely Randomized Trees (ERT), and Random Forest classifiers, to substantially enhance target recognition accuracy. The results are telling: 91.51% accuracy, high sensitivity (91.47%), specificity (97.17%), and an F1 score of 91.46%. These metrics underscore the efficacy of our approach in refining ITS radar systems, illustrating how advancements in microwave photonics can revolutionize traditional methodologies and systems.
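A sketch of the classifier comparison described, training tree ensembles on placeholder features and reporting the same four metrics; the synthetic data stand in for echo-signal descriptors:

```python
# Compare tree-ensemble classifiers on radar-style features and report
# accuracy, sensitivity, specificity, and F1. Data are synthetic.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 12))             # placeholder echo-signal descriptors
y = rng.integers(0, 2, size=600)           # target present / absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("ERT", ExtraTreesClassifier()), ("RF", RandomForestClassifier())]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    sens = recall_score(y_te, y_hat)                    # sensitivity
    spec = recall_score(y_te, y_hat, pos_label=0)       # specificity
    print(f"{name}: acc={accuracy_score(y_te, y_hat):.2f} "
          f"sens={sens:.2f} spec={spec:.2f} F1={f1_score(y_te, y_hat):.2f}")
```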


Subjects
Radar, Weather, Reproducibility of Results, Benchmarking, Machine Learning
16.
J Robot Surg ; 18(1): 153, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563887

ABSTRACT

Robot-assisted partial nephrectomy (RAPN) is a complex index procedure that urologists need to learn to perform safely. No validated performance metrics specifically developed for a RAPN training model (TM) exist. A Core Metrics Group adapted human RAPN metrics for use in a newly developed RAPN TM, explicitly defining phases, steps, errors, and critical errors. A modified Delphi meeting confirmed the face and content validity of the new metrics, with the panel reaching 100% consensus on 8 Phases, 32 Steps, 136 Errors, and 64 Critical Errors. Two trained assessors evaluated recorded video performances of novice and expert RAPN surgeons executing an emulated RAPN in the newly developed TM. There were no differences in the procedure Steps completed by the two groups. Experienced RAPN surgeons made 34% fewer Total Errors than the Novice group. Each group was then divided at its median Total Error score into HiErrs and LowErrs subgroups. The HiErrs Novice group made 118% more Total Errors than the LowErrs Expert group, and the LowErrs Expert surgeons made 77% fewer Total Errors than the HiErrs Expert surgeons. These results establish the construct and discriminative validity of the metrics. The authors describe a novel RAPN TM and its associated performance metrics, with evidence supporting their face, content, construct, and discriminative validation. This report and evidence support the implementation of a simulation-based proficiency-based progression (PBP) training program for RAPN.


Subjects
Robotic Surgical Procedures, Humans, Robotic Surgical Procedures/methods, Learning, Benchmarking, Blood Transfusion, Nephrectomy
17.
Sci Rep ; 14(1): 7650, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561346

ABSTRACT

This study presents an advanced metaheuristic, the Enhanced Gorilla Troops Optimizer (EGTO), which builds upon the Marine Predators Algorithm (MPA) to enhance the search capabilities of the Gorilla Troops Optimizer (GTO). Like many metaheuristic algorithms, the GTO has difficulty preserving convergence accuracy and stability, notably on intricate and dynamic optimization problems, especially when compared with more advanced optimization techniques. To address these challenges, the EGTO integrates high- and low-velocity ratios inspired by the MPA, effectively balancing the exploration and exploitation phases and achieving impressive results with fewer parameters and operations. Evaluation on a diverse array of benchmark functions, comprising 23 established functions and ten complex ones from the CEC2019 benchmark, highlights its performance. Comparative analysis reveals EGTO's superiority: it consistently outperforms tuna swarm optimization, the grey wolf optimizer, the gradient-based optimizer, the artificial rabbits optimization algorithm, the pelican optimization algorithm, the Runge-Kutta optimization algorithm (RUN), and the original GTO across the test functions. Furthermore, EGTO's efficacy extends to seven challenging engineering design problems: three-bar truss design, compression spring design, pressure vessel design, cantilever beam design, welded beam design, speed reducer design, and gear train design. The results showcase EGTO's robust convergence rate, its adeptness at locating local/global optima, and its supremacy over the alternative methodologies explored.


Subjects
Alaska Natives, Data Compression, Lagomorpha, Animals, Humans, Rabbits, Gorilla gorilla, Algorithms, Benchmarking
18.
BMC Bioinformatics ; 25(1): 140, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561679

ABSTRACT

Drug combination therapy is generally more effective than monotherapy in cancer treatment. However, given the growing number of available drug classes and potential drug-drug interactions, screening the vast space of combinations for effective synergy is particularly important. Existing methods for predicting the synergistic effects of drug combinations primarily focus on extracting structural features of drug molecules and cell lines but neglect the interaction mechanisms between cell lines and drug combinations; consequently, the synergistic effects of drug combinations remain incompletely understood. To address this issue, we propose a drug combination synergy prediction model based on multi-source feature interaction learning, named MFSynDCP, to predict the synergistic effects of anti-tumor drug combinations. The model includes a graph aggregation module with an adaptive attention mechanism for learning drug interactions and a multi-source feature interaction learning controller for managing information transfer between different data sources, accommodating both drug and cell line features. Comparative studies on benchmark datasets demonstrate MFSynDCP's superiority over existing methods. Additionally, its adaptive-attention graph aggregation module identifies the drug chemical substructures crucial to the synergy mechanism. Overall, MFSynDCP is a robust tool for predicting synergistic drug combinations. The source code is available from GitHub at https://github.com/kkioplkg/MFSynDCP


Subjects
Benchmarking, Simulation Training, Drug Combinations, Combined Drug Therapy, Cell Line
19.
BMC Musculoskelet Disord ; 25(1): 250, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561697

ABSTRACT

BACKGROUND: Ankle fractures are prevalent injuries that necessitate precise diagnostic tools. Traditional diagnostic methods have limitations that can be addressed using machine learning techniques, with the potential to improve accuracy and expedite diagnoses. METHODS: We trained various deep learning architectures, notably the Adapted ResNet50 with SENet capabilities, to identify ankle fractures using a curated dataset of radiographic images. Model performance was evaluated using common metrics like accuracy, precision, and recall. Additionally, Grad-CAM visualizations were employed to interpret model decisions. RESULTS: The Adapted ResNet50 with SENet capabilities consistently outperformed other models, achieving an accuracy of 93%, AUC of 95%, and recall of 92%. Grad-CAM visualizations provided insights into areas of the radiographs that the model deemed significant in its decisions. CONCLUSIONS: The Adapted ResNet50 model enhanced with SENet capabilities demonstrated superior performance in detecting ankle fractures, offering a promising tool to complement traditional diagnostic methods. However, continuous refinement and expert validation are essential to ensure optimal application in clinical settings.
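For orientation, a standard squeeze-and-excitation (SE) block of the kind added to the Adapted ResNet50; the channel count and reduction ratio are typical defaults, not necessarily the authors' settings:

```python
# Squeeze-and-excitation block: global pooling produces per-channel weights
# that rescale the feature maps. Typical default reduction ratio of 16.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # excitation: channel weights
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight feature maps

x = torch.randn(2, 256, 28, 28)
print(SEBlock(256)(x).shape)                             # torch.Size([2, 256, 28, 28])
```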


Subjects
Ankle Fractures, Humans, Ankle Fractures/diagnostic imaging, Benchmarking, Machine Learning
20.
Philos Trans R Soc Lond B Biol Sci ; 379(1902): 20230022, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38583475

ABSTRACT

Recent climate change has effectively rewound the climate clock by approximately 120 000 years and is expected to reverse it by a further 50 Myr by 2100. We aimed to answer two essential questions to better understand how ecosystems worldwide will change under predicted climate change. First, where and when could novel ecosystems emerge owing to climate change? Second, to what extent will biomes, in their current distribution, experience an increase in climate-driven ecological novelty? To answer these questions, we analysed three perspectives on how climate change could produce novel ecosystems in the near term (2100), medium term (2200), and long term (2300): areas where climate change could create new climatic combinations, areas where climate isoclines move faster than species' migration capacity, and areas where current environmental patterns become disaggregated. Using these metrics, we determined when and where novel ecosystems could emerge. Our analysis shows that unless rapid mitigation measures are taken, novel ecosystems could cover over 50% of the land surface by 2100 under all change scenarios, and over 80% by 2300. At the biome scale, over 50% of locations could shift towards novel ecosystems, with the majority seeing these changes in the next few decades. Our research shows that the impact of climate change on ecosystems is complex and varied, requiring global action to mitigate and adapt to these changes. This article is part of the theme issue 'Ecological novelty and planetary stewardship: biodiversity dynamics in a transforming biosphere'.


Subjects
Biodiversity, Ecosystem, Climate Change, Physiological Adaptation, Benchmarking